Agentic Browser Documentation

Configuration Management

Table of Contents

  1. Introduction

  2. Project Structure

  3. Core Components

  4. Architecture Overview

  5. Detailed Component Analysis

  6. Dependency Analysis

  7. Performance Considerations

  8. Troubleshooting Guide

  9. Conclusion

  10. Appendices

Introduction

This document explains configuration management across the Agentic Browser system. It covers how environment variables are loaded and used, how LLM providers are configured and validated, how service credentials and tokens are managed in the extension, and how configuration flows from environment variables to runtime UI inputs. It also documents the configuration hierarchy, fallback mechanisms, development versus production differences, and best practices for secure handling of sensitive data.

Project Structure

Configuration spans three primary areas:

  • Backend core configuration and LLM provider selection

  • API server bootstrap and routing

  • Extension-side authentication, token lifecycle, and UI-driven configuration inputs

```mermaid
graph TB
  subgraph "Backend"
    CFG["core/config.py<br/>Environment variables, logging"]
    LLM["core/llm.py<br/>Provider configs, validation, client init"]
    API["api/main.py<br/>FastAPI app, router mounts"]
    MAIN["main.py<br/>Entry point, mode selection"]
  end
  subgraph "Extension"
    WXT["extension/wxt.config.ts<br/>Permissions, manifest"]
    AUTH["useAuth.ts<br/>OAuth, token refresh, storage"]
    KEYUI["ApiKeySection.tsx<br/>UI for API key input"]
    MAP["agent-map.ts<br/>Service endpoints"]
  end
  MAIN --> API
  API --> LLM
  CFG --> API
  CFG --> LLM
  AUTH --> API
  KEYUI --> LLM
  MAP --> API
  WXT --> AUTH
```


Core Components

  • Environment configuration loader and defaults

    • Loads environment variables from a .env file and sets defaults for environment, debug mode, backend host/port, and Google API key.

    • Provides a logger factory that respects the computed log level.

    • Reference: core/config.py

  • LLM provider configuration and initialization

    • Centralized provider registry with per-provider class, environment variable names, default models, and parameter mappings.

    • Validation logic ensures required keys/base URLs are present depending on provider.

    • Supports multiple backends: Google, OpenAI-compatible, Anthropic, Ollama, DeepSeek, OpenRouter.

    • Reference: core/llm.py

  • API server bootstrap and routing

    • FastAPI application definition and router mounts for various services.

    • Reference: api/main.py

  • Entry point and mode selection

    • CLI entry point supports running as API or MCP server, with optional non-interactive default to API.

    • Reference: main.py

  • Extension configuration and permissions

    • The manifest declares the permissions and host permissions the extension needs.

    • OAuth, token refresh, and storage live in useAuth.ts; API key entry in ApiKeySection.tsx.

    • Reference: extension/wxt.config.ts
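The entry point's mode selection described above might look roughly like the following sketch; the flag names and defaults are assumptions, not the actual interface of main.py:

```python
import argparse

def parse_mode(argv=None) -> str:
    """Choose between running the API server or the MCP server.
    Defaults to "api" so non-interactive runs need no arguments."""
    parser = argparse.ArgumentParser(description="Agentic Browser entry point")
    parser.add_argument("--mode", choices=["api", "mcp"], default="api",
                        help="Run as the API server or the MCP server")
    args = parser.parse_args(argv)
    return args.mode
```

With this shape, invoking the program with no arguments falls through to the API server, which is the documented non-interactive default.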


Architecture Overview

The configuration architecture follows a layered approach:

  • Environment variables are loaded early in the process and influence logging, backend host/port, and default Google API key.

  • LLM configuration is provider-centric with explicit environment variable requirements and fallbacks.

  • The extension manages service credentials via OAuth and local storage, exposing UI controls for API key input and provider/model selection.

  • Service endpoints are declared in the extension and mounted in the API server.

```mermaid
sequenceDiagram
  participant Env as "Environment (.env)"
  participant CoreCfg as "core/config.py"
  participant LLM as "core/llm.py"
  participant ExtAuth as "useAuth.ts"
  participant ExtKeyUI as "ApiKeySection.tsx"
  participant API as "api/main.py"
  Env-->>CoreCfg: "Load variables"
  CoreCfg-->>LLM: "Provide defaults (e.g., GOOGLE_API_KEY)"
  ExtAuth->>ExtAuth: "OAuth flow, token refresh"
  ExtKeyUI->>LLM: "User-selected provider/model"
  API-->>LLM: "Invoke LLM client for requests"
```


Detailed Component Analysis

Environment Configuration System

  • Variable loading

    • Loads .env variables at import time.

    • Sets environment and debug flags with sensible defaults.

    • Defines backend host and port with defaults suitable for local development.

    • Extracts a default Google API key for convenience.

    • Reference: core/config.py

  • Logging configuration

    • Computes logging level based on debug flag and applies it globally.

    • Exposes a logger factory to ensure consistent logging across modules.

    • Reference: core/config.py

  • Development vs Production differences

    • Debug mode toggles logging verbosity.

    • Host/port defaults target local development; adjust for production deployments.

    • Reference: core/config.py

  • Configuration validation

    • Logging level is derived from environment variables; misconfiguration affects observability.

    • Reference: core/config.py
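A minimal sketch of what such a loader might look like; the variable names (ENVIRONMENT, DEBUG, BACKEND_HOST, BACKEND_PORT) are assumptions for illustration, while the real names and defaults live in core/config.py:

```python
import logging
import os

try:
    # Load .env once, at import time, as core/config.py does.
    from dotenv import load_dotenv  # python-dotenv
    load_dotenv()
except ImportError:
    pass  # fall back to the process environment

# Hypothetical variable names; see core/config.py for the actual ones.
ENVIRONMENT = os.getenv("ENVIRONMENT", "development")
DEBUG = os.getenv("DEBUG", "false").lower() in ("1", "true", "yes")
BACKEND_HOST = os.getenv("BACKEND_HOST", "127.0.0.1")
BACKEND_PORT = int(os.getenv("BACKEND_PORT", "8000"))
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY", "")

# Debug flag drives the global logging level.
LOG_LEVEL = logging.DEBUG if DEBUG else logging.INFO
logging.basicConfig(level=LOG_LEVEL)

def get_logger(name: str) -> logging.Logger:
    """Logger factory that respects the computed log level."""
    logger = logging.getLogger(name)
    logger.setLevel(LOG_LEVEL)
    return logger
```

Loading once at import time keeps every module consistent: each call to the logger factory sees the same computed level.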


LLM Provider Configuration and Fallback Mechanisms

  • Provider registry

    • Centralized mapping of provider identifiers to LangChain classes, environment variables, default models, and parameter mappings.

    • Includes support for Google, OpenAI-compatible, Anthropic, Ollama, DeepSeek, and OpenRouter.

    • Reference: core/llm.py

  • Initialization logic

    • Validates provider existence and model availability.

    • Resolves API key from constructor argument or environment variable depending on provider requirements.

    • Resolves base URL from constructor, override, or environment variable; raises explicit errors if missing.

    • Builds parameter dictionary and initializes the underlying LLM client.

    • Reference: core/llm.py

  • Fallback mechanisms

    • Uses provider-specific defaults for model names.

    • Applies base URL overrides for certain providers (e.g., DeepSeek, OpenRouter).

    • Raises descriptive errors when required credentials or URLs are missing.

    • Reference: core/llm.py

  • BYOKeys approach

    • Supports passing API keys directly to the constructor for providers that require them.

    • For providers that do not require API keys (e.g., Ollama), passing an API key is tolerated with a warning.

    • Reference: core/llm.py

  • Example provider selection flow

```mermaid
flowchart TD
  Start(["Initialize LLM"]) --> CheckProvider["Lookup provider config"]
  CheckProvider --> ProviderFound{"Provider supported?"}
  ProviderFound --> |No| ErrorProvider["Raise error: unsupported provider"]
  ProviderFound --> |Yes| SetModel["Set model name (arg or default)"]
  SetModel --> ModelValid{"Model name valid?"}
  ModelValid --> |No| ErrorModel["Raise error: no model"]
  ModelValid --> |Yes| ResolveKey["Resolve API key (env or arg)"]
  ResolveKey --> KeyProvided{"Key available?"}
  KeyProvided --> |No| ErrorKey["Raise error: missing API key"]
  KeyProvided --> |Yes| ResolveBaseURL["Resolve base URL (arg/override/env)"]
  ResolveBaseURL --> URLValid{"Base URL valid?"}
  URLValid --> |No| ErrorURL["Raise error: missing base URL"]
  URLValid --> |Yes| BuildParams["Build client params"]
  BuildParams --> InitClient["Initialize LLM client"]
  InitClient --> Done(["Ready"])
```
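The selection flow above can be sketched in code. The registry contents, environment-variable names, default models, and the DeepSeek base URL below are illustrative assumptions, not the actual contents of core/llm.py:

```python
import os
from typing import Optional

# Illustrative registry; the real one in core/llm.py maps provider ids to
# LangChain classes, env var names, default models, and parameter mappings.
PROVIDERS = {
    "google":   {"key_env": "GOOGLE_API_KEY", "default_model": "gemini-pro",
                 "needs_key": True, "needs_base_url": False},
    "ollama":   {"key_env": None, "default_model": "llama3",
                 "needs_key": False, "needs_base_url": True,
                 "url_env": "OLLAMA_BASE_URL"},
    "deepseek": {"key_env": "DEEPSEEK_API_KEY", "default_model": "deepseek-chat",
                 "needs_key": True, "needs_base_url": True,
                 "base_url": "https://api.deepseek.com"},
}

def init_llm(provider: str, model: Optional[str] = None,
             api_key: Optional[str] = None,
             base_url: Optional[str] = None) -> dict:
    cfg = PROVIDERS.get(provider)
    if cfg is None:
        raise ValueError(f"Unsupported provider: {provider!r}")

    # Fall back to the provider's default model name.
    model = model or cfg["default_model"]
    if not model:
        raise ValueError("No model name given and no default available")

    if cfg["needs_key"]:
        # Constructor argument wins; otherwise read the provider's env var.
        api_key = api_key or os.getenv(cfg["key_env"] or "", "")
        if not api_key:
            raise ValueError(f"Missing API key: set {cfg['key_env']} or pass api_key")
    elif api_key:
        print(f"warning: api_key ignored for provider {provider}")  # tolerated

    if cfg["needs_base_url"]:
        # Constructor arg, then provider override, then env var.
        base_url = base_url or cfg.get("base_url") or os.getenv(cfg.get("url_env", ""), "")
        if not base_url:
            raise ValueError(f"Missing base URL for provider {provider}")

    # Stand-in for building the parameter dict and creating the real client.
    return {"provider": provider, "model": model,
            "api_key": api_key, "base_url": base_url}
```

Each failure path raises a descriptive error naming the variable to set, matching the documented error-handling behavior.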


Service Credentials, Authentication, and Token Management

  • Extension OAuth and token lifecycle

    • Uses browser identity APIs to initiate OAuth with Google, exchange authorization code for tokens, and persist user data in local storage.

    • Implements automatic token refresh when nearing expiration and manual refresh capability.

    • Handles token status display and user feedback.

    • Reference: extension/entrypoints/sidepanel/hooks/useAuth.ts

  • Service endpoint mapping

    • The extension declares service endpoints in agent-map.ts, matching the routers mounted by the API server.

    • Reference: agent-map.ts

  • UI-driven API key input

    • ApiKeySection.tsx provides UI controls for entering API keys and selecting a provider and model.

    • Reference: ApiKeySection.tsx

  • Permissions and host access

    • Manifest defines broad permissions and host permissions required by the extension.

    • Reference: extension/wxt.config.ts
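The refresh logic in useAuth.ts is TypeScript, but the expiry check it performs is language-agnostic; a sketch of the same idea in Python (the 5-minute margin and token field names are assumptions):

```python
import time

REFRESH_MARGIN_SECONDS = 300  # refresh when within 5 minutes of expiry (assumed)

def needs_refresh(token, now=None):
    """True when the stored token is expired or about to expire."""
    now = time.time() if now is None else now
    return token["expires_at"] - now <= REFRESH_MARGIN_SECONDS

def ensure_fresh(token, refresh_fn):
    """Return a valid token, refreshing via refresh_fn when needed."""
    if needs_refresh(token):
        return refresh_fn(token["refresh_token"])
    return token
```

Checking against a margin rather than the exact expiry time avoids sending a request with a token that expires mid-flight, which is the point of "automatic refresh when nearing expiration."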


Configuration Hierarchy: Environment Variables to Runtime UI Inputs

Configuration is resolved in layers. Values entered at runtime in the extension UI (provider, model, API key) are passed to the backend; there, explicit constructor arguments take precedence over environment variables, and provider-specific defaults fill in anything left unset.
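That precedence can be sketched as a single resolver; the exact ordering and names here are an assumption based on the documented behavior, not code lifted from the project:

```python
import os

def resolve_setting(ui_value=None, arg_value=None, env_var=None, default=None):
    """Pick the first configured value: runtime UI input, then constructor
    argument, then environment variable, then built-in default."""
    if ui_value:
        return ui_value
    if arg_value:
        return arg_value
    if env_var and os.getenv(env_var):
        return os.getenv(env_var)
    return default
```

A resolver like this makes the fallback chain explicit and testable, instead of scattering `or` expressions through initialization code.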

Configuration Validation and Error Handling

  • LLM initialization validation

    • Explicit checks for unsupported providers, missing model names, missing API keys, and missing base URLs.

    • Descriptive error messages guide users to set environment variables or pass arguments.

    • Reference: core/llm.py

  • Logging and diagnostics

    • Logger factory ensures consistent logging levels derived from environment variables.

    • Reference: core/config.py

  • Frontend error handling

    • The extension surfaces token status and user feedback when authentication or refresh fails.

    • Reference: extension/entrypoints/sidepanel/hooks/useAuth.ts


Configuration Hot-Reloading and Dynamic Updates

Environment variables are read once at import time, so changes to .env require a backend restart. By contrast, UI-driven inputs (provider, model, API key) take effect at runtime without restarting the backend.

Dependency Analysis

```mermaid
graph LR
  DOTENV[".env"] --> CFG["core/config.py"]
  CFG --> API["api/main.py"]
  CFG --> LLM["core/llm.py"]
  AUTH["useAuth.ts"] --> API
  KEYUI["ApiKeySection.tsx"] --> LLM
  MAP["agent-map.ts"] --> API
```


Performance Considerations

  • Avoid repeated environment parsing: load .env once at startup and reuse cached values.

  • Minimize LLM client reinitialization: cache the LLM instance and reuse it across requests.

  • Reduce network calls: batch token refreshes and avoid unnecessary re-authentication.

  • Logging overhead: tune logging level in production to reduce I/O.
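The second point, caching the LLM instance, can be as simple as memoizing the constructor. A hedged sketch (`build_client` is a stand-in for the real initialization in core/llm.py):

```python
from functools import lru_cache

def build_client(provider: str, model: str) -> dict:
    # Placeholder for the real, comparatively expensive client construction.
    return {"provider": provider, "model": model}

@lru_cache(maxsize=None)
def get_llm(provider: str, model: str) -> dict:
    """Build the LLM client once per (provider, model) pair and reuse it."""
    return build_client(provider, model)
```

Because `lru_cache` keys on the arguments, switching providers or models still creates a fresh client, while repeated requests with the same settings reuse the existing one.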

Troubleshooting Guide

  • Missing API key for a provider

    • Symptom: Initialization error indicating missing API key for the selected provider.

    • Resolution: Set the appropriate environment variable or pass the key directly to the LLM constructor.

    • Reference: core/llm.py

  • Missing base URL for a provider

    • Symptom: Initialization error indicating missing base URL for the selected provider.

    • Resolution: Set the provider’s base URL environment variable or pass it explicitly.

    • Reference: core/llm.py

  • Unsupported provider

    • Symptom: Error indicating an unsupported provider identifier.

    • Resolution: Choose a supported provider from the registry.

    • Reference: core/llm.py

  • OAuth failure in extension

    • Symptom: Sign-in does not complete or the identity flow is rejected.

    • Resolution: Verify the OAuth configuration and the permissions declared in extension/wxt.config.ts, then retry sign-in.

    • Reference: extension/entrypoints/sidepanel/hooks/useAuth.ts

  • Token refresh issues

    • Symptom: Requests fail after the access token expires.

    • Resolution: Use the manual refresh capability, or sign in again to obtain a new refresh token.

    • Reference: extension/entrypoints/sidepanel/hooks/useAuth.ts

  • Logging verbosity

    • Symptom: Too verbose or too quiet logs.

    • Resolution: Adjust debug flag to toggle logging level.

    • Reference: core/config.py


Conclusion

The Agentic Browser configuration system combines environment-driven defaults with explicit provider configuration and runtime UI inputs. The backend enforces strict validation for LLM providers, while the extension manages authentication and token lifecycles. By following the outlined best practices and troubleshooting steps, teams can reliably deploy and operate the system across development and production environments.

Appendices

Environment Variables and Defaults

Exact variable names are defined in core/config.py and core/llm.py. GOOGLE_API_KEY supplies the default Google credential; each LLM provider additionally defines its own API key and, where applicable, base URL variables.

Deployment Scenarios and Examples

  • Development

    • Run the API server locally with default host/port and enable debug logging.

    • Reference: core/config.py, main.py

  • Production

    • Disable debug mode, set backend host/port explicitly via environment variables, and supply provider credentials through the deployment environment rather than a committed .env file.

    • Reference: core/config.py, core/llm.py
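As an illustration, a production .env might override the development defaults. Variable names other than GOOGLE_API_KEY are assumptions; the authoritative names are in core/config.py:

```ini
# Illustrative production .env; actual variable names live in core/config.py
ENVIRONMENT=production
DEBUG=false
BACKEND_HOST=0.0.0.0
BACKEND_PORT=8080
# Inject real secrets via the deployment environment, not this file.
GOOGLE_API_KEY=
```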

Best Practices for Sensitive Configuration

  • Store API keys and secrets in environment variables; avoid committing them to source control.

  • Use provider-specific environment variables and avoid embedding secrets in code.

  • Limit extension permissions to those required for functionality.

  • Reference: core/llm.py, extension/wxt.config.ts